
Clearer error for SDPA when explicitly requested #28006

Merged
merged 2 commits into huggingface:main on Jan 16, 2024

Conversation

fxmarty
Contributor

@fxmarty fxmarty commented Dec 13, 2023

As per title, partially fixes #28003.

@fxmarty fxmarty requested review from ArthurZucker, amyeroberts and LysandreJik and removed request for ArthurZucker December 13, 2023 14:58
Collaborator

@amyeroberts amyeroberts left a comment


Thanks!

f"{cls.__name__} does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to "
"request support for this architecture: https://github.com/huggingface/transformers/issues/new"
f"{cls.__name__} does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the"
' support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument `attn_implementation="eager"` meanwhile.'
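For context, a minimal sketch of the kind of check that could raise the new message (illustrative only; the class, flag, and method names below, such as `_supports_sdpa` and `_check_sdpa_support`, are assumptions, not necessarily the exact ones in `modeling_utils.py`):

```python
# Illustrative sketch only, not the actual transformers implementation.
# Intent of the PR: when SDPA is explicitly requested for an architecture
# that does not implement it, fail with the clearer message above instead
# of a generic error.
class MyModel:
    _supports_sdpa = False  # hypothetical per-architecture flag

    @classmethod
    def _check_sdpa_support(cls):
        if not cls._supports_sdpa:
            raise ValueError(
                f"{cls.__name__} does not support an attention implementation through "
                "torch.nn.functional.scaled_dot_product_attention yet. Please request the "
                "support for this architecture: https://github.com/huggingface/transformers/issues/28005. "
                "If you believe this error is a bug, please open an issue in Transformers GitHub "
                'repository and load your model with the argument `attn_implementation="eager"` meanwhile.'
            )
```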
Collaborator

Wondering if we should include a specific code fix in the error message here as it's easy to get out-of-sync with the actual code / currently recommended fix

Collaborator

@amyeroberts amyeroberts Dec 13, 2023

Ah, sorry, I wasn't clear. I meant I don't think it's advisable to have "and load your model with the argument `attn_implementation="eager"` meanwhile", as this suggestion might not always be correct

Contributor Author

Hum, as of transformers==4.36 this would always work. #28003 needs to be properly fixed anyway. I'll think about it, but having a clearer error message pointing to an easy fix is helpful in the short term, I think
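(For illustration, the workaround mentioned in the message boils down to the following; a minimal sketch, with an example checkpoint name rather than any specific affected model.)

```python
# Minimal sketch of the suggested workaround: explicitly select the eager
# attention implementation instead of SDPA when loading the model.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "gpt2",  # example checkpoint; substitute the affected model
    attn_implementation="eager",
)
```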

Collaborator

I agree that it's useful in the short term; the point is more that it's hard to maintain in the long term. It's easy to forget that this is here, we don't go back to update it, and users default to a fix which is no longer recommended without us knowing.

Ultimately, it's just an error message. I don't feel super strongly about this. If you do think the fix is better to have I'm happy for it to be merged as-is.

Collaborator

@ArthurZucker ArthurZucker left a comment

+1 for a code fix, and let's split it into 2 lines

@huggingface huggingface deleted a comment from github-actions bot Jan 15, 2024
@amyeroberts
Collaborator

@fxmarty Are you happy for me to merge?

@fxmarty
Contributor Author

fxmarty commented Jan 16, 2024

Sure @amyeroberts!

@amyeroberts amyeroberts merged commit 02f8738 into huggingface:main Jan 16, 2024
21 checks passed
wgifford pushed a commit to wgifford/transformers that referenced this pull request Jan 21, 2024
AjayP13 pushed a commit to AjayP13/transformers that referenced this pull request Jan 22, 2024
Labels: None yet
Projects: None yet
4 participants